4/24/2017
Assistant Professor Nikolay Bliznyuk, Departments of Agricultural & Biological Engineering, Biostatistics, and Statistics, University of Florida
A Hierarchical Bayesian Spatio-Temporal Model for Multi-Pathogen Transmission of Hand, Foot, and Mouth Disease
Mathematical modeling of infectious diseases plays an important role in the development and evaluation of intervention plans. These plans, such as the development of vaccines, are usually pathogen-specific, but laboratory confirmation of all pathogen-specific infections is rarely available. If an epidemic is a consequence of co-circulation of several pathogens, it is desirable to model these pathogens jointly in order to study the transmissibility of the disease. Our work is motivated by hand, foot and mouth disease (HFMD) surveillance data from China. We build a hierarchical Bayesian multi-pathogen model that uses a latent process to link the disease counts and the lab test data. Our model explicitly accounts for spatio-temporal disease patterns, and inference is carried out by an MCMC algorithm. We study the operating characteristics of the algorithm on simulated data and apply it to the HFMD surveillance data from China.
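As a toy illustration of the latent-process idea described above (not the speaker's model), the sketch below links total case counts to pathogen-specific counts through a latent multinomial allocation and updates the pathogen mix with a conjugate Gibbs step; the two-pathogen setup, the Dirichlet prior, and all numbers are invented for illustration.

```python
# Toy Gibbs sampler: latent multinomial allocation of case counts to two
# pathogens, with lab-confirmed results informing the pathogen proportions.
# Everything here (two pathogens, conjugate Dirichlet prior, data) is invented.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: weekly totals and a small lab-tested subsample per week.
T = 52
true_p = np.array([0.7, 0.3])                        # true pathogen proportions
totals = rng.poisson(200, size=T)                    # reported HFMD counts
lab_n = rng.integers(5, 20, size=T)                  # lab-tested cases per week
lab_pos = np.vstack([rng.multinomial(n, true_p) for n in lab_n])

alpha = np.ones(2)                                   # Dirichlet prior
p = np.array([0.5, 0.5])
draws = []
for it in range(2000):
    # allocate untested cases to pathogens given the current proportions
    untested = totals - lab_n
    latent = np.vstack([rng.multinomial(n, p) for n in untested])
    counts = latent + lab_pos                        # pathogen-specific counts
    # conjugate Dirichlet update for the pathogen proportions
    p = rng.dirichlet(alpha + counts.sum(axis=0))
    if it >= 500:
        draws.append(p)

print("posterior mean proportions:", np.mean(draws, axis=0))
```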
***
4/17/2017
Abdhi Sarkar, Department of Statistics and Probability, Michigan State University
Variable Selection for Spatial Data using Penalized Methods and its Application to Neuroimaging
For many real-world applications involving discrete data on a geographical domain, a tractable likelihood for the correlated observations is unavailable. We circumvent this by assuming a parametric structure on the moments of a multivariate random variable and using a quasi-likelihood approach. In this talk, I propose a method that selects relevant variables and estimates their corresponding coefficients simultaneously. Under increasing-domain asymptotics, after introducing a misspecified working correlation matrix that satisfies a certain mixing condition, we show that this estimator possesses the "oracle" property first suggested by Fan and Li (2001) for the non-convex SCAD penalty. Several simulation results and a real data example are provided to illustrate the performance of the proposed estimator. In the analysis of task-based fMRI, a unique modelling scheme allows us to perform selection at the voxel level. We implement a penalized weighted least squares technique with a known separable spatial and temporal covariance structure, originally used for the study of genes in a longitudinal setting, using two penalty terms: one for selection and another for smoothing. A fast coordinate descent algorithm provides a computationally less intensive method with a direct interpretation. A simulation study is presented to highlight the utility of the method.
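As a hedged illustration of the penalized-selection idea (not the spatial quasi-likelihood estimator from the talk), the sketch below runs coordinate descent for a linear model with the SCAD penalty of Fan and Li (2001), using the standard univariate SCAD thresholding rule; the data and tuning values are invented.

```python
# Coordinate descent for a linear model with the SCAD penalty (Fan and Li, 2001).
# Illustrative only: the spatial correlation / quasi-likelihood aspects of the
# talk are not represented here.
import numpy as np

def scad_threshold(z, lam, a=3.7):
    """Univariate minimizer of 0.5*(b - z)**2 + SCAD(b; lam, a)."""
    az = abs(z)
    if az <= 2 * lam:
        return np.sign(z) * max(az - lam, 0.0)
    if az <= a * lam:
        return ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)
    return z

def scad_coordinate_descent(X, y, lam, a=3.7, n_iter=200):
    n, p = X.shape
    X = (X - X.mean(0)) / X.std(0)          # standardize so (1/n) * x_j'x_j = 1
    y = y - y.mean()
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z_j = X[:, j] @ r_j / n                  # univariate OLS solution
            beta[j] = scad_threshold(z_j, lam, a)
    return beta

# Toy example: only the first three coefficients are truly nonzero.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + X[:, 2] + rng.normal(size=200)
print(np.round(scad_coordinate_descent(X, y, lam=0.2), 2))
```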
***
4/3/17
Associate Professor Michael Levine, Department of Statistics, Purdue University
A regularization-based approach to estimation of a two-component semiparametric density mixture with a known component
We consider a semiparametric mixture of two univariate density functions where one of them is known while the weight and the other function are unknown. Such mixtures have a history of application to the problem of detecting differentially expressed genes under two or more conditions in microarray data. Until now, some additional knowledge about the unknown component (e.g. the fact that it belongs to a location family) has been assumed. As opposed to this approach, we do not assume any additional structure on the unknown density function. For this mixture model, we derive a new sufficient identifiability condition and pinpoint a specific class of distributions describing the unknown component for which this condition is mostly satisfied. We also suggest a novel approach to estimation of this model that is based on an idea of applying a maximum smoothed likelihood to what would otherwise have been an ill-posed problem. We introduce an iterative MM (Majorization-Minimization) algorithm that estimates all of the model parameters. We establish that the algorithm possesses a descent property with respect to a log-likelihood objective functional and prove that the algorithm, indeed, converges.
Finally, we also illustrate the performance of our algorithm in a simulation study and using a real dataset.
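A rough sketch of the kind of iteration described above, under simplifying assumptions: the unknown component is re-estimated at each step by a weighted kernel density estimate and the mixing weight by an average of posterior weights. This is an EM-style illustration of the idea rather than the speaker's maximum smoothed likelihood MM algorithm; the standard normal known component, the bandwidth, and the data are assumptions.

```python
# EM-style iteration for a two-component mixture with a known component:
# the unknown density is re-estimated by a weighted kernel density estimate.
# The N(0, 1) known component, bandwidth, and data are invented for illustration.
import numpy as np
from scipy.stats import norm

def fit_mixture(x, bandwidth=0.3, n_iter=100):
    n = len(x)
    f0 = norm.pdf(x)                                   # known component at the data
    K = norm.pdf((x[:, None] - x[None, :]) / bandwidth) / bandwidth  # kernel matrix
    pi, w = 0.5, np.full(n, 0.5)
    for _ in range(n_iter):
        f1 = K @ w / w.sum()                           # weighted KDE of unknown part
        w = pi * f1 / ((1 - pi) * f0 + pi * f1)        # posterior component weights
        pi = w.mean()                                  # updated mixing weight
    return pi, w

# Toy data: 30% of the observations come from a shifted unknown component.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])
pi_hat, _ = fit_mixture(x)
print("estimated weight of the unknown component:", round(pi_hat, 3))
```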
***
3/27/17
Assistant Professor Amanda Mejia, Department of Statistics, Indiana University
A Bayesian General Linear Modeling Approach to Cortical Surface fMRI Data Analysis
Cortical surface functional magnetic resonance imaging (cs-fMRI) has recently experienced a rise in popularity relative to traditional 3-dimensional volumetric fMRI. In cs-fMRI data, the gray matter of the cortex is represented as a 2-dimensional surface manifold, and other tissue types such as white matter are discarded. Cs-fMRI offers several advantages over volumetric fMRI, including dimension reduction, removal of tissue types where neuronal activity does not occur, and improved alignment of cortical areas across multiple subjects. In addition, cs-fMRI allows for more meaningful smoothing and is more compatible with the common assumption of isotropy in spatial models, which states that spatial dependence between two locations is only a function of the distance between them. Although several Bayesian spatial models assuming isotropy and stationarity have been applied to volumetric fMRI data, the spatial dependence structure of this data may in reality be much more complex due to cortical folding and the presence of multiple tissue types. Additionally, these models often reduce computational burden by employing simple spatial covariance models, applying the model to a single slice at a time, or using approximate computational methods such as variational Bayes (VB). These models may also require the data to lie on a regular lattice structure, inhibiting their extension to cs-fMRI data. Since no Bayesian spatial model has currently been developed specifically for cs-fMRI data, most analyses continue to employ the general linear model (GLM) (Worsley and Friston, 1995), in which a linear regression model is fit separately at each location in the brain relating the observed fMRI time series to the expected neuronal response to a certain task or stimulus. At each location, a hypothesis test is then performed on the model coefficients to determine whether that location is “activated”. This presents a massive multiple comparisons problem that remains the subject of debate and controversy today (Eklund et al., 2016). The classical GLM approach also fails to properly account for spatial dependence in the activation amplitudes of neighboring voxels. In this paper, we propose a Bayesian GLM approach to estimating task activation with cs-fMRI data, which employs a class of sophisticated spatial processes to flexibly model latent fields of task activation. To perform the Bayesian computation, we use integrated nested Laplace approximation (INLA), a highly accurate and computationally efficient technique (Rue et al., 2009). To identify regions of activation, we propose a novel joint posterior probability map (PPM) method, which eliminates the problem of multiple comparisons. Finally, we extend the existing spatial model from the single-subject to the multi-subject case, thus facilitating group-level inference. The method is validated and compared to the classical GLM through simulation studies and a motor task fMRI study from the Human Connectome Project.
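For context, the sketch below implements the "massive univariate" baseline the abstract contrasts against: an ordinary least squares GLM at each surface vertex plus a marginal posterior probability map under a flat prior. The speaker's method instead places a joint spatial prior on the activation field and uses INLA; the design, the effect-size threshold gamma, and the data here are all assumptions.

```python
# Per-vertex GLM fit plus a marginal posterior probability map under a flat
# prior (posterior approximately N(beta_hat, se^2)). Simulated data only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
T, V = 200, 500                               # time points, surface vertices
task = np.tile([0.0] * 10 + [1.0] * 10, 10)   # block-design task regressor
X = np.column_stack([np.ones(T), task])

beta_true = np.where(np.arange(V) < 50, 1.0, 0.0)   # 50 "active" vertices
Y = X @ np.vstack([np.zeros(V), beta_true]) + rng.normal(0, 1, (T, V))

XtX_inv = np.linalg.inv(X.T @ X)
B = XtX_inv @ X.T @ Y                         # per-vertex OLS coefficients
resid = Y - X @ B
sigma2 = (resid ** 2).sum(0) / (T - 2)
se = np.sqrt(sigma2 * XtX_inv[1, 1])          # std. error of the task coefficient

gamma = 0.5                                   # effect size of interest
ppm = 1 - norm.cdf(gamma, loc=B[1], scale=se) # P(beta_v > gamma | data), per vertex
print("vertices with P(beta > gamma) > 0.95:", int((ppm > 0.95).sum()))
```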
***
3/20/17
Professor Nianjun Liu, Department of Epidemiology and Biostatistics, Indiana University
Associating Multivariate Quantitative Phenotypes with Genetic Variants in Family Samples
The recent development of sequencing technology allows identification of association between the whole spectrum of genetic variants and complex diseases. Over the past few years, a number of association tests for rare variants have been developed. Jointly testing for association between genetic variants and multiple correlated phenotypes may increase the power to detect causal genes in family-based studies, but familial correlation needs to be appropriately handled to avoid an inflated type I error rate. Here we propose a novel approach for multivariate family data using kernel machine regression (denoted as MF-KM) that is based on a linear mixed-model framework and can be applied to a large range of studies with different types of traits. In our simulation studies, the usual kernel machine test has inflated type I error rates when applied directly to familial data, while our proposed MF-KM method preserves the expected type I error rates. Moreover, the MF-KM method has increased power compared to methods that either analyze each phenotype separately while considering family structure or use only unrelated founders from the families. Finally, we illustrate our proposed methodology by analyzing whole-genome genotyping data from a lung function study.
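As a simplified illustration (not the MF-KM method itself), the sketch below computes a SKAT-style kernel association statistic Q = r'Kr for a single quantitative trait in unrelated samples, with a permutation reference distribution; the talk's method extends this idea to multiple traits and family data via a linear mixed model with a kinship random effect. All data here are simulated.

```python
# SKAT-style kernel association statistic for one quantitative trait in
# unrelated samples, with a permutation reference distribution.
import numpy as np

rng = np.random.default_rng(4)
n, m = 300, 20                                        # subjects, variants in a gene
G = rng.binomial(2, 0.2, size=(n, m)).astype(float)   # genotype dosages (0/1/2)
X = np.column_stack([np.ones(n), rng.normal(size=n)]) # intercept + covariate
y = X @ np.array([1.0, 0.5]) + 0.4 * G[:, 0] + rng.normal(size=n)

beta0 = np.linalg.lstsq(X, y, rcond=None)[0]          # null model: covariates only
r = y - X @ beta0                                     # null-model residuals
K = G @ G.T                                           # linear genotype kernel
Q_obs = r @ K @ r

# permutation reference: shuffle residuals relative to the kernel
perm = []
for _ in range(500):
    rp = rng.permutation(r)
    perm.append(rp @ K @ rp)
print("permutation p-value:", np.mean(np.array(perm) >= Q_obs))
```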
***
3/6/17
Assistant Professor Jeremy Gaskins, Department of Bioinformatics & Biostatistics, University of Louisville
Bayesian methods for non-ignorable dropout in joint models in smoking cessation studies
Inference on data with missingness can be challenging, particularly if the knowledge that a measurement was unobserved provides information about its distribution. Our work is motivated by the Commit to Quit II study, a smoking cessation trial that measured smoking status and weight change as weekly outcomes. It is expected that dropout in this study was informative and that patients with missed measurements are more likely to be smoking, even after conditioning on their observed smoking and weight history. We jointly model the categorical smoking status and continuous weight change outcomes by assuming normal latent variables for cessation and by extending the usual pattern mixture model to the bivariate case. The model includes a novel approach to sharing information across patterns through a Bayesian shrinkage framework to improve estimation stability for sparsely observed patterns. To accommodate the presumed informativeness of the missing data in a parsimonious manner, we model the unidentified components of the model under a non-future dependence assumption and specify departures from missing at random through sensitivity parameters, whose distributions are elicited from a subject-matter expert.
This is joint work with Michael Daniels (University of Texas, Austin) and Bess Marcus (UC San Diego).
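A minimal sketch of the sensitivity-parameter idea, not the bivariate pattern mixture model of the talk: missing binary smoking outcomes are imputed under a missing-at-random model, and the fitted log-odds are then shifted by a sensitivity parameter delta, encoding the belief that dropouts are more likely to be smoking. All data and the grid of delta values are invented.

```python
# Sensitivity analysis for a missing binary outcome: impute under MAR, then
# shift the fitted log-odds of smoking by delta for the dropouts.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(5)
n = 1000
prev_smoke = rng.binomial(1, 0.6, n)                       # last observed status
smoke = rng.binomial(1, expit(-0.5 + 1.5 * prev_smoke))    # current status
missing = rng.binomial(1, 0.3, n).astype(bool)             # dropout indicator

obs = ~missing                       # MAR fit: P(smoke | prev) on observed cases
p_hat = np.array([smoke[obs & (prev_smoke == g)].mean() for g in (0, 1)])

for delta in [0.0, 0.5, 1.0, 2.0]:   # delta = 0 recovers missing at random
    logit_p = np.log(p_hat / (1 - p_hat))[prev_smoke[missing]] + delta
    imputed = rng.binomial(1, expit(logit_p))   # dropouts more likely to smoke
    full = smoke.astype(float).copy()
    full[missing] = imputed
    print(f"delta={delta}: estimated smoking rate = {full.mean():.3f}")
```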
***
2/27/17
Assistant Professor Wei Zheng, Department of Mathematical Sciences, Indiana University-Purdue University Indianapolis
Informative sampling of large databases
For many tasks of data analysis, a large database of explanatory variables is readily available; however, the responses are missing and expensive to obtain. A natural remedy is to judiciously select a sample of the data for which the responses are to be measured. In this paper, we adopt classical criteria from the design of experiments to quantify the information in a given sample. Then, we provide a theoretical justification for approximating the optimal-sample problem by a continuous problem, for which fast algorithms can be developed with a guarantee of global convergence. Our approach exhibits the following features: (i) the statistical efficiency of any candidate sample can be evaluated without knowing the exact optimal sample; (ii) it can be applied to a very wide class of statistical models; (iii) it can be integrated with a broad class of information criteria; (iv) it is scalable for big data.
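As a hedged illustration of the continuous relaxation described above (not necessarily the paper's algorithm), the sketch below places weights on the candidate rows of a large design matrix, optimizes a D-optimality criterion with the classical multiplicative algorithm, and selects the highest-weight rows as the sample whose responses would be measured. The data and criterion choice are assumptions.

```python
# Continuous relaxation for informative subsampling: weight the candidate rows,
# optimize log det of the weighted information matrix with the multiplicative
# algorithm, and select the highest-weight rows.
import numpy as np

rng = np.random.default_rng(6)
N, p, n_sample = 5000, 5, 100
X = rng.normal(size=(N, p))                     # available explanatory variables

w = np.full(N, 1.0 / N)                         # continuous design weights
for _ in range(200):
    M = X.T @ (w[:, None] * X)                  # weighted information matrix
    d = np.einsum("ij,jk,ik->i", X, np.linalg.inv(M), X)   # x_i' M^{-1} x_i
    w = w * d / p                               # multiplicative update
    w = w / w.sum()

chosen = np.argsort(w)[-n_sample:]              # rows whose responses get measured
sign, logdet = np.linalg.slogdet(X[chosen].T @ X[chosen])
print("log det of the sampled information matrix:", round(logdet, 2))
```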
***
2/20/17
Assistant Professor Chung-chieh Shan, School of Informatics and Computing, Indiana University
Exact Bayesian inference by symbolic disintegration
Our research group has been automating calculations on probability distributions represented as programs. After briefly introducing and demonstrating our work, I'll spend most of my time on one particular operation, disintegration.
Bayesian inference, of posterior knowledge from prior knowledge and observed evidence, is typically defined by Bayes's rule, which says the posterior multiplied by the probability of an observation equals a joint probability. But the observation of a continuous quantity usually has probability zero, in which case Bayes's rule says only that the unknown times zero is zero. To infer a posterior distribution from a zero-probability observation, the statistical notion of _disintegration_ tells us to specify the observation as an expression rather than a predicate, but does not tell us how to compute the posterior. We present the first method of computing a disintegration from a probabilistic program and an expression of a quantity to be observed, even when the observation has probability zero. Because the method produces an exact posterior term and preserves a semantics in which monadic terms denote measures, it composes with other inference methods in a modular way -- without sacrificing accuracy or performance.
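A small numeric illustration of the problem being solved, assuming a simple normal prior and observation model: conditioning on the probability-zero event Y = y0 by a predicate keeps no samples, while density reweighting, which is what a disintegration supplies, recovers the correct posterior. The talk's contribution is computing such disintegrations symbolically and exactly from probabilistic programs; this sketch only conveys the underlying concept.

```python
# Conditioning on a probability-zero observation Y = y0.
# Model: x ~ Normal(0, 1), y | x ~ Normal(x, sigma^2).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
y0, sigma = 1.3, 0.5
x = rng.normal(0, 1, 100_000)                   # prior draws
y = rng.normal(x, sigma)                        # forward simulation of y

# Predicate-style conditioning fails: the event {y == y0} has probability zero.
print("samples with y exactly equal to y0:", int((y == y0).sum()))

# Disintegration-style conditioning: weight each prior draw by the density of
# the observation, posterior(x) proportional to prior(x) * p(y0 | x).
w = norm.pdf(y0, loc=x, scale=sigma)
print("weighted posterior mean:", round(np.sum(w * x) / np.sum(w), 3))
print("exact conjugate posterior mean:", round(y0 / (1 + sigma**2), 3))
```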
***
2/13/17
Dr. James Townsend, Rudy Professor, Department of Psychological and Brain Sciences, Indiana University
Assessing the Capacity and Architecture in Perception, Cognition, and Action: An Introduction to Systems Factorial Technology (SFT)
Systems Factorial Technology grew out of years of analysis of elementary psychological processes, how even mathematical models of them can mimic one another’s predictions and, fortunately, how they can be rigorously tested in the proper theory-driven experimental methodologies. Applications now range from binocular dot perception, to memory and visual search, to auditory and bi-modal perception, and categorization and include studies of dyslexia, schizophrenia, Asperger’s syndrome, so-called super-taskers, and human-computer interaction. This is a tutorial on basic conceptions and testable predictions, emphasizing workload capacity and elemental mental architectures such as parallel vs. serial processing. Although this talk will not emphasize our statistics methodologies, which have been developed over the past couple of decades, reference to SFT-specific tool boxes based on either Neyman-Pearson or Bayesian principles will be given. Two experimental examples will be discussed, one on the Stroop effect and the other on the configural feature processing of faces.
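For readers unfamiliar with workload capacity, the sketch below estimates the standard OR-task capacity coefficient C(t) = H_AB(t) / (H_A(t) + H_B(t)), where H(t) = -log S(t) is the cumulative hazard of response times in double-target and single-target conditions; values near 1 are consistent with unlimited-capacity independent parallel processing. The data are simulated, and this is not one of the SFT toolboxes mentioned in the talk.

```python
# OR-task workload capacity coefficient from simulated response times:
# C(t) = H_AB(t) / (H_A(t) + H_B(t)), with H(t) = -log S(t).
import numpy as np

rng = np.random.default_rng(8)
rt_A = rng.exponential(0.4, 2000) + 0.2          # single-target RTs (seconds)
rt_B = rng.exponential(0.4, 2000) + 0.2
# double-target OR trials: respond as soon as the first channel finishes
rt_AB = np.minimum(rng.exponential(0.4, 2000), rng.exponential(0.4, 2000)) + 0.2

def cumulative_hazard(rt, t):
    surv = np.mean(rt[:, None] > t[None, :], axis=0)   # empirical survivor function
    return -np.log(np.clip(surv, 1e-6, 1.0))

t_grid = np.linspace(0.3, 1.0, 8)
C = cumulative_hazard(rt_AB, t_grid) / (
    cumulative_hazard(rt_A, t_grid) + cumulative_hazard(rt_B, t_grid))
print(np.round(C, 2))     # values near 1: unlimited-capacity parallel processing
```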
***
2/6/17
Associate Professor Jaroslaw Harezlak, Department of Epidemiology and Biostatistics, Indiana University School of Public Health
Laplacian-based statistical regularization approach: association of gray matter imaging markers with alcoholism incorporating structural connectivity information
The majority of multimodal neuroimaging studies are analyzed separately for each modality.
However, statistical methods that simultaneously assess multimodal data provide a more integrative and comprehensive understanding of the brain. We propose an extension to the statistical regularization methods in the linear model setting with a Laplacian-based penalty operator. Model parameters are estimated by a unified approach directly incorporating structural connectivity information into the estimation by exploiting the joint eigenproperties of the predictors and the penalty operator. We present the closed-form solutions for the estimators, test their properties via a simulation study and apply them to find the best predictive imaging markers of alcohol drinking phenotypes. Introducing a priori information minimized spurious findings by assigning penalty weights in such a way that highly connected regions associated with the outcome were less penalized than other regions that had no association with the outcome. Our future work will incorporate functional connectivity and finer cortical brain parcellation in the penalty operator definition.
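A minimal sketch in the spirit of the approach described above, with invented data: regional predictors are penalized by a graph Laplacian built from a structural connectivity matrix, so that coefficients of strongly connected regions are encouraged to be similar, and the estimator has the generalized-ridge closed form beta_hat = (X'X + lambda L)^{-1} X'y. The connectivity matrix and lambda are illustrative assumptions, not the talk's weighting scheme.

```python
# Generalized ridge regression with a graph-Laplacian penalty built from a
# (simulated) structural connectivity matrix over brain regions.
import numpy as np

rng = np.random.default_rng(9)
n, p = 150, 30                                   # subjects, brain regions
X = rng.normal(size=(n, p))                      # gray-matter imaging markers

A = (rng.uniform(size=(p, p)) < 0.1).astype(float)   # structural connectivity
A = np.triu(A, 1)
A = A + A.T                                      # symmetric, zero diagonal
L = np.diag(A.sum(axis=1)) - A                   # graph Laplacian

beta_true = np.zeros(p)
beta_true[:5] = 1.0
y = X @ beta_true + rng.normal(size=n)

lam = 5.0
beta_hat = np.linalg.solve(X.T @ X + lam * L, X.T @ y)   # closed-form estimator
print(np.round(beta_hat[:8], 2))
```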